4: Clustering and Classification

Data set

I will be using the Boston data set from the late 1970s, which describes housing values in the suburbs of Boston. First the data set needs to be loaded; it is included in the MASS package.

# access the MASS package
library(MASS)
# load the data
data("Boston")

# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

The data set consists of 506 observations and 14 variables. The variables are mostly numeric, with two integer variables (chas and rad).

The variables include, for example, the per capita crime rate, the proportion of owner-occupied units built before 1940, location relative to the Charles River, and the percentage of lower-status population.
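
The full description of each variable can be checked from the package documentation:

# open the help page describing each variable in the Boston data
?Boston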

Graphical overview of the data

#access needed libraries
library(ggplot2)
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:MASS':
## 
##     select
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyr)
library(tidyverse)
## ── Attaching packages ─────────────────────── tidyverse 1.3.2 ──
## ✔ tibble  3.1.8     ✔ stringr 1.4.1
## ✔ readr   2.1.3     ✔ forcats 0.5.2
## ✔ purrr   0.3.5
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ✖ dplyr::select() masks MASS::select()
library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
# create plot matrices with ggpairs()
p <- ggpairs(Boston[,1:5], lower = list(combo = wrap("facethist", bins = 20)))
p2 <- ggpairs(Boston[,6:10], lower = list(combo = wrap("facethist", bins = 20)))
# draw the plots
p

p2

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Variable distributions differ greatly. In these plot matrices, which show half of the pairwise relationships, most of the variables appear to be correlated with each other.
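
The visual impression can be double-checked numerically; for example, the correlation matrix (rounded to two decimals) summarizes all pairwise relationships at once:

# compute the correlation matrix and round it for readability
cor_matrix <- cor(Boston) %>% round(digits = 2)
cor_matrix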

Data set standardization

First, I will standardize the data set and view its summaries.

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

After scaling, the mean of each variable is 0 and the standard deviation is 1, so the variables are on a comparable scale.
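
This can be verified directly, since scale() subtracts the column mean and divides by the column standard deviation:

# column means should be (numerically) zero and standard deviations one
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)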

Next, a categorical crime variable will be created, with break points based on the quantiles of the crim variable (per capita crime rate by town). The data set is then divided into train and test sets, so that 80% of randomly selected observations belong to the train set.

# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# this is a matrix, so it needs to be changed to a data frame for further analysis
boston_scaled <- as.data.frame(boston_scaled)

# create a quantile vector of crime and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime' by using the bins just created as break points
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)


# Creating a random selection of the data, so that 80% of observations belong to the training set and the remaining 20% to the test set
# number of rows in the Boston data set 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows (for reproducibility, a seed such as set.seed(123) could be set first)
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]
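
As a quick sanity check (not shown in the original output), the crime classes should be roughly balanced in the train set:

# class proportions in the train set; each should be close to 25 %
prop.table(table(train$crime))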

Linear discriminant analysis

We’ll use linear discriminant analysis (LDA) to see how well all the other variables together separate the crime categories. The data set was divided into two parts: a train set to fit the model and a test set to check how well the model works. First, we will fit the model and plot the results.

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2450495 0.2425743 0.2673267 0.2450495 
## 
## Group means:
##                   zn      indus        chas        nox         rm        age
## low       0.82462695 -0.9629199 -0.15302300 -0.8530339  0.4991180 -0.8652826
## med_low  -0.06831306 -0.3066486 -0.03128211 -0.5829613 -0.1295909 -0.3916279
## med_high -0.37448920  0.1523082  0.23803578  0.3765891  0.1284284  0.4185455
## high     -0.48724019  1.0171737 -0.11325431  1.0260898 -0.4123228  0.7809795
##                 dis        rad        tax    ptratio      black       lstat
## low       0.8446695 -0.6999747 -0.7500957 -0.3615825  0.3802808 -0.80113328
## med_low   0.4229971 -0.5342034 -0.4567056 -0.0379059  0.3132701 -0.12723227
## med_high -0.3687254 -0.4384762 -0.3247231 -0.3143421  0.1282033  0.01100725
## high     -0.8465899  1.6375616  1.5136504  0.7801170 -0.8708212  0.86529978
##                  medv
## low       0.542388751
## med_low  -0.003234181
## med_high  0.184193363
## high     -0.682964019
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.08543998  0.53670332 -0.92890682
## indus    0.11133616 -0.49822845  0.62264352
## chas    -0.11407324 -0.06781675  0.06536506
## nox      0.28915516 -0.77310506 -1.30296446
## rm      -0.13283651 -0.09071654 -0.18191018
## age      0.19778684 -0.35812770 -0.15498672
## dis     -0.05705294 -0.25740009  0.33468681
## rad      3.38383723  0.70843995 -0.04982690
## tax      0.04180382  0.34631313  0.37806867
## ptratio  0.06543837  0.01886922 -0.31558362
## black   -0.11443741  0.01948914  0.14864380
## lstat    0.18361563 -0.20772287  0.47507702
## medv     0.15671772 -0.37518658 -0.05936513
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9516 0.0362 0.0122
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)

Observations categorized as high seem to cluster together, far apart from the other observations, even though some med_high observations are part of that cluster. The low, med_low, and med_high clusters partly overlap, yet they can still be distinguished from each other. The proportion of trace above shows that LD1 alone captures about 95% of the between-group separation, and accessibility to radial highways (variable rad) has by far the largest LD1 coefficient, so it seems to have the highest impact on the crime category "high".
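
This can be confirmed from the model object itself; for example, ordering the LD1 coefficients by absolute size shows how strongly rad dominates:

# LD1 coefficients ordered by absolute size
round(sort(abs(lda.fit$scaling[, "LD1"]), decreasing = TRUE), 2)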

Next, the trained model will be used to predict the categories in the test data.

# save the crime categories from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       20       6        2    0
##   med_low    7      15        6    0
##   med_high   0       3       13    2
##   high       0       0        0   28

Predictions were most accurate for the high category, where all 28 observations went to the right class. For med_high, 13 of 18 predictions were correct, and most of the mis-categorized observations went to med_low. For med_low, 15 of 28 predictions were correct, with the errors split between low and med_high. For low, 20 of 28 predictions were correct, and almost all of the remaining ones were categorized as med_low. It seems that the low and med_high categories are quite often predicted as med_low.

All the other variables together are decent but not excellent at predicting the crime variable. This is consistent with the LDA plot from the train data: the overlapping categories mentioned above are the ones that are difficult to predict.
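
For an overall summary, the total accuracy can be computed from the cross-tabulation with a minimal check like this:

# overall accuracy: correct predictions divided by all predictions
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)  # for the table above: (20 + 15 + 13 + 28) / 102, about 0.75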

K-means clustering

# reload the data
data("Boston")
# standardize data
boston_rescaled <- scale(Boston)

Distances between observations are calculated with two different metrics: the Euclidean distance and the Manhattan distance.

# euclidean distance matrix
dist_eu <- dist(boston_rescaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_rescaled, method = "manhattan")

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618
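
As a reminder of how the two metrics differ, they can be computed by hand for the first two scaled observations (this check is not part of the original analysis):

# Euclidean distance: square root of the sum of squared differences
x <- boston_rescaled[1, ]
y <- boston_rescaled[2, ]
sqrt(sum((x - y)^2))  # matches the first pair in dist_eu
# Manhattan distance: sum of absolute differences
sum(abs(x - y))       # matches the first pair in dist_man
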
# k-means clustering
km <- kmeans(boston_rescaled, centers = 3)

# plot the Boston dataset with clusters
pairs(boston_rescaled[,1:5], col = km$cluster)

pairs(boston_rescaled[,6:10], col = km$cluster)

Here, I will investigate what the optimal number of clusters is.

set.seed(123)

# determine the number of clusters
k_max <- 8

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_rescaled, k)$tot.withinss}) 

# visualize the total within-cluster sum of squares as a function of the number of clusters
ggplot(data.frame(k = 1:k_max, twcss = twcss), aes(x = k, y = twcss)) + geom_line()

The curve shows a sharp bend (an "elbow") at two clusters: the total within-cluster sum of squares drops steeply up to that point and flattens afterwards, so 2 is the optimal number of clusters. Let's use that and visualize the clusters.

# k-means clustering
km2 <- kmeans(boston_rescaled, centers = 2)

# plot the Boston dataset with clusters
pairs(boston_rescaled[,1:5], col = km2$cluster)

pairs(boston_rescaled[,6:10], col = km2$cluster)

For some of the variables the cluster colors fit the groups nicely. For others, the clusters are intermixed with each other and hence not so good at separating observations. Compared with the three-cluster version, this one looks tidier.
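
To see what actually drives the two-cluster split, the cluster sizes and centers can be inspected; for example:

# number of observations in each cluster
km2$size
# cluster centers on the standardized scale; variables with large absolute values separate the clusters
round(km2$centers, 2)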

3D plots of the train data

Here, I will plot the train data in 3D. I will plot the data twice: first using the crime categories for coloring, and then using k-means clusters for coloring, with the same number of groups in both, to see the effect of the coloring strategy on the plot.

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# Access the plotly package. Create a 3D plot of the columns of the matrix product using the crime classes in train data.
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = classes)
# Draw similar plot, but use k-means for setting colors.
# remove the crime variable from train data to run k-means
# use centers = 4 to compare the effect of k-means vs quantile in setting colors
trainkm <- dplyr::select(train, -crime)
kmtrain <- kmeans(trainkm, centers = 4)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = kmtrain$cluster)

Observations are located identically in both plots, since the coordinates come from the same LDA scores; only the coloring differs. I used the same number of groups in both to compare the effect of the grouping method. To summarize: the quantile-based crime classes are defined by fixed break points on a single variable (crim), whereas k-means assigns observations by minimizing the within-cluster distances over all variables, so it is more sensitive to the overall composition of the data.
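
The agreement between the two groupings could also be quantified with a simple cross-tabulation (not part of the original output):

# how the k-means clusters map onto the quantile-based crime classes
table(crime = train$crime, cluster = kmtrain$cluster)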